Learning effective motion features is an essential pursuit of video representation learning. This paper presents a simple yet effective sample construction strategy to boost the learning of motion features in video contrastive learning. The proposed method, dubbed Motion-focused Quadruple Construction (MoQuad), augments instance discrimination by meticulously disturbing the appearance and motion of both the positive and negative samples to create a quadruple for each video instance, such that the model is encouraged to exploit motion information. Unlike recent approaches that create extra auxiliary tasks for learning motion features or apply explicit temporal modelling, our method keeps the simple and clean contrastive learning paradigm (i.e., SimCLR) without multi-task learning or extra modelling. In addition, we design two extra training strategies by analyzing initial MoQuad experiments. Extensive experiments show that simply applying MoQuad to SimCLR achieves superior performance on downstream tasks compared to the state of the art. Notably, on the UCF-101 action recognition task, we achieve 93.7% accuracy after pre-training the model on Kinetics-400 for only 200 epochs, surpassing various previous methods.
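As a rough illustration of the quadruple idea (not the paper's exact recipe), the sketch below builds, for one clip, a positive whose appearance is disturbed but whose motion is preserved, plus negatives whose frame order is shuffled so that appearance alone cannot separate them. The specific disturbances (per-clip color jitter, frame shuffling) are assumptions made for illustration.

```python
# A minimal, hypothetical sketch of a motion-focused quadruple for one video clip.
import torch

def color_jitter(clip: torch.Tensor, strength: float = 0.4) -> torch.Tensor:
    """Apply the same random brightness/contrast change to every frame (appearance only)."""
    b = 1.0 + (torch.rand(1).item() - 0.5) * 2 * strength   # brightness factor
    c = 1.0 + (torch.rand(1).item() - 0.5) * 2 * strength   # contrast factor
    mean = clip.mean(dim=(-1, -2, -3), keepdim=True)
    return ((clip - mean) * c + mean) * b

def shuffle_frames(clip: torch.Tensor) -> torch.Tensor:
    """Destroy motion by randomly permuting the temporal order of frames."""
    perm = torch.randperm(clip.shape[0])
    return clip[perm]

def build_quadruple(clip: torch.Tensor):
    """clip: (T, C, H, W). Returns anchor, positive, and two motion-disturbed negatives."""
    anchor = clip
    positive = color_jitter(clip)                   # appearance changed, motion preserved
    neg_same = shuffle_frames(clip)                 # same appearance, motion destroyed
    neg_aug = shuffle_frames(color_jitter(clip))    # both disturbed
    return anchor, positive, neg_same, neg_aug

# Example: a dummy 16-frame RGB clip
quad = build_quadruple(torch.rand(16, 3, 112, 112))
print([x.shape for x in quad])
```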
The number of international benchmarking competitions is steadily increasing across fields of machine learning (ML) research and practice. So far, however, little is known about the common practice, as well as the bottlenecks, faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
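As a minimal illustration of the patch-based strategy most respondents reported for samples too large to process at once, the sketch below tiles a 3D volume into fixed-size patches; the patch and stride sizes are arbitrary choices, not values taken from the survey.

```python
# Illustrative patch extraction for large 3D volumes (patch-based training).
import numpy as np

def extract_patches(volume: np.ndarray,
                    patch: tuple = (64, 64, 64),
                    stride: tuple = (64, 64, 64)):
    """Yield fixed-size patches from a 3D volume, dropping incomplete borders."""
    D, H, W = volume.shape
    pd, ph, pw = patch
    sd, sh, sw = stride
    for z in range(0, D - pd + 1, sd):
        for y in range(0, H - ph + 1, sh):
            for x in range(0, W - pw + 1, sw):
                yield volume[z:z + pd, y:y + ph, x:x + pw]

patches = list(extract_patches(np.zeros((128, 256, 256), dtype=np.float32)))
print(len(patches), patches[0].shape)   # 32 patches of shape (64, 64, 64)
```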
In recent years, object detection has achieved substantial performance improvements, but detection of small objects remains unsatisfactory. This work proposes a strategy based on feature fusion and dilated convolution that employs dilated convolution to broaden the receptive field of feature maps at various scales in order to address this issue. On the one hand, it improves the detection accuracy of larger objects; on the other hand, it provides more contextual information for small objects, which helps improve small-object detection accuracy. The shallow semantic information of small objects is obtained by filtering out noise in the feature map, and the feature information of more small objects is preserved by using a multi-scale feature fusion module and an attention mechanism. Fusing this shallow feature information with deep semantic information generates richer feature maps for small object detection. Experiments show that this method achieves higher accuracy than the baseline YOLOv3 network on small and occluded objects. In addition, we achieve 32.8% mean Average Precision for small object detection on the MS COCO2017 test set. For 640×640 input, the method reaches 88.76% mAP on the PASCAL VOC2012 dataset.
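A rough sketch of the two ingredients described above, with layer widths and the fusion operator chosen purely for illustration: parallel dilated convolutions enlarge the receptive field, and shallow features are fused with upsampled deep semantics through a simple channel-attention gate.

```python
# Illustrative dilated-convolution context module and shallow/deep feature fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedContext(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # parallel branches with increasing dilation widen the receptive field
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in (1, 2, 4)]
        )
        self.fuse = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))

class ShallowDeepFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, shallow, deep):
        # upsample deep semantics to the shallow resolution, gate shallow noise by channel attention
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
        return shallow * self.attn(deep_up) + deep_up

x_shallow = torch.rand(1, 64, 80, 80)
x_deep = DilatedContext(64)(torch.rand(1, 64, 20, 20))
print(ShallowDeepFusion(64)(x_shallow, x_deep).shape)   # (1, 64, 80, 80)
```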
The goal of 3D pose transfer is to transfer the pose from a source mesh to a target mesh while preserving the identity information (e.g., face, body shape) of the target mesh. Deep learning-based methods have improved the efficiency and performance of 3D pose transfer. However, most of them are trained under the supervision of ground truth, whose availability is limited in real-world scenarios. In this work, we present X-DualNet, a simple yet effective approach that enables unsupervised 3D pose transfer. In X-DualNet, we introduce a generator $G$ that contains correspondence learning and pose transfer modules to achieve 3D pose transfer. We learn the shape correspondence by solving an optimal transport problem without any key point annotations and generate high-quality meshes with our elastic instance normalization (ElaIN) in the pose transfer module. With $G$ as the basic component, we propose a cross consistency learning scheme and a dual reconstruction objective to learn pose transfer without supervision. In addition, we adopt an as-rigid-as-possible deformer in the training process to fine-tune the body shape of the generated results. Extensive experiments on human and animal data demonstrate that our framework achieves performance comparable to state-of-the-art supervised approaches.
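A highly simplified sketch of what unsupervised objectives of this flavor can look like: self-reconstruction when a mesh receives its own pose, plus a cycle-style dual reconstruction that transfers a pose and then transfers back. The toy generator and the unweighted loss sum are assumptions, not X-DualNet's actual modules.

```python
# Toy illustration of self-reconstruction and cycle-style dual reconstruction losses.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Placeholder network: maps (identity vertices, pose vertices) -> output vertices."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Linear(6, 3)

    def forward(self, identity, pose):
        return self.mlp(torch.cat([identity, pose], dim=-1))

def dual_reconstruction_loss(G, id_mesh, pose_mesh):
    # self reconstruction: transferring a mesh's own pose should be the identity map
    self_rec = ((G(id_mesh, id_mesh) - id_mesh) ** 2).mean()
    # cycle / dual reconstruction: transfer the pose, then transfer back
    transferred = G(id_mesh, pose_mesh)
    cycled = G(transferred, id_mesh)
    cycle_rec = ((cycled - id_mesh) ** 2).mean()
    return self_rec + cycle_rec

G = ToyGenerator()
id_mesh, pose_mesh = torch.rand(1, 1000, 3), torch.rand(1, 1000, 3)
print(dual_reconstruction_loss(G, id_mesh, pose_mesh).item())
```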
The formalization of existing mathematical proofs is a notoriously difficult process. Despite decades of research on automation and proof assistants, writing formal proofs remains arduous and accessible to only a few experts. While previous studies on automating formalization focused on powerful search algorithms, no attempts were made to take advantage of available informal proofs. In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches and uses the sketches to guide an automated prover by directing its search toward easier sub-problems. We investigate two relevant setups where informal proofs are either written by humans or generated by a language model. Our experiments and ablation studies show that large language models are able to produce well-structured formal sketches that follow the same reasoning steps as the informal proofs. Guiding an automated prover with these sketches improves its performance from 20.9% to 39.3% on a collection of mathematical competition problems.
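The control flow of the three stages can be summarized as in the sketch below; `draft_llm`, `sketch_llm`, `extract_open_goals`, and `automated_prover` are hypothetical callables standing in for a language model, a sketch parser, and an off-the-shelf prover, and none of these names come from the paper's code.

```python
# Schematic pipeline for Draft, Sketch, and Prove with hypothetical components.
from typing import Callable, List

def draft_sketch_prove(statement: str,
                       draft_llm: Callable[[str], str],
                       sketch_llm: Callable[[str, str], str],
                       extract_open_goals: Callable[[str], List[str]],
                       automated_prover: Callable[[str], bool]) -> bool:
    # 1. Draft: write (or retrieve) an informal, natural-language proof of the statement.
    informal_proof = draft_llm(statement)
    # 2. Sketch: translate the informal proof into a formal proof sketch that keeps the
    #    same reasoning steps but leaves intermediate conjectures as open sub-goals.
    formal_sketch = sketch_llm(statement, informal_proof)
    # 3. Prove: let an automated prover close each open sub-goal, so its search is
    #    directed at easier sub-problems rather than the whole theorem at once.
    sub_goals = extract_open_goals(formal_sketch)
    return all(automated_prover(goal) for goal in sub_goals)
```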
Recently, deep learning methods have achieved state-of-the-art performance in many medical image segmentation tasks. Many of these methods are based on convolutional neural networks (CNNs). In such approaches, the encoder is the key component for extracting global and local information from the input image; the extracted features are then passed to the decoder to predict the segmentation. In contrast, several recent works have shown the superior performance of Transformers, which can better model long-range spatial dependencies and capture low-level details. However, Transformers perform poorly as the sole encoder for certain tasks in which they cannot effectively replace convolution-based encoders. In this paper, we propose a model with a dual encoder for 3D biomedical image segmentation. Our model is a U-shaped CNN augmented with an independent Transformer encoder. We fuse the information from the convolutional encoder and the Transformer encoder and pass it to the decoder to obtain the results. We evaluate our method on three public datasets from three different challenges: BTCV, MODA, and Decathlon. Compared with state-of-the-art models with and without Transformers on each task, our proposed method obtains higher Dice scores across the board.
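A minimal sketch of the dual-encoder layout under stated assumptions: a small 3D CNN encoder and an independent Transformer encoder over patch embeddings process the same volume, their features are fused by concatenation and a 1×1×1 convolution, and a toy decoder produces the segmentation. All layer sizes and the fusion operator are illustrative, not the paper's configuration.

```python
# Illustrative dual-encoder (CNN + Transformer) 3D segmentation skeleton.
import torch
import torch.nn as nn

class DualEncoderSeg(nn.Module):
    def __init__(self, in_ch=1, feat=32, n_classes=3, patch=4):
        super().__init__()
        self.cnn_enc = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, stride=2, padding=1), nn.ReLU())
        self.patch_embed = nn.Conv3d(in_ch, feat, kernel_size=patch, stride=patch)
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat, nhead=4, batch_first=True), num_layers=2)
        self.fuse = nn.Conv3d(2 * feat, feat, 1)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(feat, feat, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(feat, n_classes, 2, stride=2))

    def forward(self, x):
        c = self.cnn_enc(x)                                   # (B, F, D/4, H/4, W/4)
        t = self.patch_embed(x)                               # (B, F, D/4, H/4, W/4)
        B, C, D, H, W = t.shape
        t = self.transformer(t.flatten(2).transpose(1, 2))    # tokens: (B, D*H*W, F)
        t = t.transpose(1, 2).reshape(B, C, D, H, W)
        return self.decoder(self.fuse(torch.cat([c, t], dim=1)))

print(DualEncoderSeg()(torch.rand(1, 1, 32, 32, 32)).shape)   # (1, 3, 32, 32, 32)
```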
Instance segmentation on 3D point clouds has been attracting increasing attention due to its wide applications, especially in scene understanding. However, most existing methods require fully annotated training data, and manually preparing ground-truth labels at the point level is tedious and labor-intensive. To address this problem, we propose RWSeg, a novel weakly supervised method that requires labeling only one point per object. With these sparse labels, we introduce a unified framework with two branches that uses self-attention and random walks to propagate semantic and instance information, respectively, to unknown regions. Furthermore, we propose a Cross-Graph Competing Random Walks (CGCRW) algorithm that encourages competition among different instance graphs to resolve ambiguities between closely placed objects and improve instance assignment. RWSeg can generate high-quality instance-level pseudo labels. Experimental results on the ScanNet-v2 and S3DIS datasets show that our method achieves performance comparable to fully supervised methods and outperforms previous weakly supervised methods by a large margin. This is the first work to bridge the gap between weak and full supervision in this area.
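A toy version of random-walk propagation from one clicked point per object over a point affinity graph is sketched below; the affinity construction, the absorbing-seed formulation, and everything about CGCRW beyond plain random walks are assumptions made for illustration.

```python
# Toy random-walk label propagation from sparse "one click per object" seeds.
import numpy as np

def random_walk_propagate(affinity: np.ndarray, seed_labels: dict, n_steps: int = 50) -> np.ndarray:
    """affinity: (N, N) non-negative similarities; seed_labels: {point_index: instance_id}."""
    N = affinity.shape[0]
    instances = sorted(set(seed_labels.values()))
    # row-normalized transition matrix of the walk
    P = affinity / affinity.sum(axis=1, keepdims=True)
    # one probability column per instance, initialized at the clicked seed points
    prob = np.zeros((N, len(instances)))
    for idx, inst in seed_labels.items():
        prob[idx, instances.index(inst)] = 1.0
    for _ in range(n_steps):
        prob = P @ prob
        # keep seeds clamped to their own labels (absorbing states)
        for idx, inst in seed_labels.items():
            prob[idx] = 0.0
            prob[idx, instances.index(inst)] = 1.0
    # pseudo instance label per point
    return np.array(instances)[prob.argmax(axis=1)]

# Example: 6 points on a chain graph, two instances seeded at points 0 and 5
A = np.eye(6) + np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
print(random_walk_propagate(A, {0: 10, 5: 20}))
```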
Interest in applying deep neural networks to automatically interpret and analyze the 12-lead electrocardiogram (ECG) has increased. The current paradigm of machine learning methods is often limited by the amount of labeled data. This is especially problematic for clinically relevant data, where labeling at scale can be time-consuming and costly in terms of the required expertise and human effort. Moreover, deep learning classifiers may be vulnerable to adversarial examples and perturbations, which could have catastrophic consequences when applied, for example, in the context of medical treatment, clinical trials, or insurance claims. In this paper, we propose a physiologically inspired data augmentation method to improve the performance and robustness of heart disease detection from ECG signals. We obtain augmented samples by driving the data distribution toward other classes along geodesics in Wasserstein space. To better leverage domain-specific knowledge, we design a ground metric that recognizes differences between ECG signals based on physiologically determined features. Learning from 12-lead ECG signals, our model is able to distinguish five cardiac conditions. Our results show improvements in accuracy and robustness, reflecting the effectiveness of our data augmentation method.
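For reference, moving along a geodesic in Wasserstein space corresponds to the standard displacement interpolation

$$\mu_t \;=\; \big((1-t)\,\mathrm{Id} + t\,T\big)_{\#}\,\mu_0, \qquad t \in [0,1],$$

where $T$ is the optimal transport map from the source distribution $\mu_0$ toward a target class distribution $\mu_1$ under the chosen ground metric, and augmented samples are drawn at intermediate $t$. This is the generic textbook form; the paper's physiologically informed ground metric enters only through $T$ and is not reproduced here.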
The acquisition and evaluation of pavement surface data play a critical role in pavement condition assessment. In this paper, an efficient end-to-end network for automatic pavement crack segmentation, called RHA-Net, is proposed to improve pavement crack segmentation accuracy. RHA-Net is built by integrating residual blocks (ResBlocks) and hybrid attention blocks into an encoder-decoder architecture. The ResBlocks are used to improve the ability of RHA-Net to extract high-level abstract features. The hybrid attention blocks are designed to fuse low-level features with high-level features, helping the model focus on the correct channels and crack regions and thereby improving the feature representation capability of RHA-Net. An image dataset containing 789 pavement crack images collected by a self-designed mobile robot is constructed and used to train and evaluate the proposed model. Comparisons with other state-of-the-art networks and comprehensive ablation studies validate the effectiveness of adding the residual blocks and the hybrid attention mechanism. Furthermore, a lightweight version of the model, generated by introducing depthwise separable convolutions, achieves better performance and faster processing speed with only 1/30 of the parameters of U-Net. The developed system can segment pavement cracks in real time on an embedded device, the Jetson TX2 (25 fps). A video taken in real-time experiments is released at https://youtu.be/3xiogk0fig4.
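A brief sketch of the depthwise separable convolution behind the lightweight variant, assuming the usual decomposition into a per-channel (depthwise) 3×3 convolution followed by a 1×1 pointwise convolution; the parameter comparison below is against a plain 3×3 convolution with the same channel counts, not against RHA-Net itself.

```python
# Depthwise separable convolution and its parameter saving vs. a standard conv.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

std = nn.Conv2d(64, 128, 3, padding=1)
sep = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))   # roughly 8x fewer parameters for these channel counts
```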
Recently, many efficient Transformers have been proposed to reduce the quadratic computational complexity of the standard Transformer caused by softmax attention. However, most of them simply swap softmax attention for an efficient attention mechanism without considering architectures customized specifically for efficient attention. In this paper, we argue that the hand-crafted vanilla Transformer architecture designed for softmax attention may not be suitable for efficient Transformers. To address this issue, we propose a new framework that finds optimal architectures for efficient Transformers with neural architecture search (NAS) techniques. The proposed method is validated on popular machine translation and image classification tasks. We observe that the optimal architecture of an efficient Transformer has reduced computation compared with the standard Transformer, but lower overall accuracy. This suggests that softmax attention and efficient attention have their own distinct characteristics, but neither can balance accuracy and efficiency at the same time. This motivates us to mix the two types of attention to reduce the performance imbalance. In addition to the search spaces commonly used in existing NAS Transformer methods, we propose a new search space that allows the NAS algorithm to automatically search the attention variants together with the architecture. Extensive experiments on WMT En-De and CIFAR-10 show that our searched architecture maintains accuracy comparable to the standard Transformer while achieving notably improved computational efficiency.
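A toy illustration of a search space in which each layer's attention variant is itself a searchable choice (softmax vs. a linear, kernelized approximation); the NAS controller, the specific efficient-attention variant, and the layer layout are all assumptions made for illustration.

```python
# Toy per-layer attention-variant search space: softmax vs. linear attention.
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # standard attention, O(n^2) in sequence length
    scores = q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

def linear_attention(q, k, v, eps=1e-6):
    # kernelized approximation (elu + 1 feature map), O(n) in sequence length
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.transpose(-1, -2) @ v                             # (d, d)
    z = q @ k.sum(dim=-2, keepdim=True).transpose(-1, -2) + eps
    return (q @ kv) / z

ATTENTION_CHOICES = {"softmax": softmax_attention, "linear": linear_attention}

def forward_with_architecture(x, per_layer_choice):
    """per_layer_choice, e.g. ["softmax", "linear"], as sampled by a NAS algorithm."""
    for choice in per_layer_choice:
        x = x + ATTENTION_CHOICES[choice](x, x, x)           # residual self-attention layer
    return x

x = torch.rand(2, 128, 64)                                   # (batch, seq, dim)
print(forward_with_architecture(x, ["softmax", "linear"]).shape)
```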